[Amazon FSx for NetApp ONTAP] Creating a writable clone volume with FlexClone

Make the most of FlexClone to speed up development and cut costs
2024.04.08

I want to quickly spin up a volume for temporary testing

Hello, this is のんピ (@non____97).

Have you ever wanted to quickly spin up a volume for temporary testing? I have.

When running tests that involve writes against existing data, it is common to work on a copy of the target data so as to limit the blast radius. When the data to copy is large, however, the time spent copying is a real waste. The copy time is especially painful if you want to refresh from production data on a regular basis.

ONTAP's FlexClone looks like a good fit for this problem.

FlexClone is a feature that creates a writable clone volume from a Snapshot of the target volume. Changes made to data on the clone volume do not affect the source volume. You can think of it as similar in feel to Amazon Aurora clones.

With FlexClone you can rapidly provision volumes not only for test and development environments, but also for per-user training environments.

FlexClone also helps when a failure or security incident forces you to investigate the data in a volume: you can create a clone volume from a pre-incident Snapshot and keep the business running on it, while continuing forensic analysis on the original volume.

I actually tried it out, so let me walk you through it.

Summary first

  • FlexClone is a feature that quickly creates a clone volume using a Snapshot of the source volume
    • You can either take a new Snapshot at clone-creation time or select an existing one
  • No actual data copy takes place, so the clone volume is available immediately
  • Only data newly written to the clone consumes additional capacity
  • You can also create a clone of a clone volume
  • Clones can be created not only per volume, but also for individual files or LUNs within a volume
  • If you want to delete the source volume after creating a clone, you can split the clone to remove the dependency
    • When FlexClones are nested, an intermediate clone volume cannot be split

What is FlexClone?

FlexClone creates clone volumes quickly and efficiently. Let's look a bit under the hood.

FlexClone uses a Snapshot of the source volume and shares data blocks between the source and the clone. Even if you clone a volume holding 1TB of data, as long as nothing is changed on the clone, the physical storage consumed remains 1TB. Only the updates written on the clone volume consume additional space.

(Figure) Excerpt: Accelerating software development with NetApp FlexClone technology

(Figure) Excerpt: NetApp Tech Community ONLINE vol.46

Since no actual data copy takes place, a clone volume can be created in a matter of seconds. And because duplicate data blocks are not stored, it also helps reduce costs.
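To see why clone creation is instant and why only changed data consumes space, here is a toy copy-on-write model in Python. The `Volume` class, `clone()`, and block IDs are inventions for illustration only, not ONTAP internals.

```python
# A toy model of FlexClone-style copy-on-write block sharing.
# Illustrative only: block IDs and this Volume class are made up for the example.

class Volume:
    def __init__(self, blocks):
        # block number -> ID of a block in the shared storage pool
        self.block_map = dict(blocks)

    def clone(self):
        # Cloning copies only the block map (pointers), never the data
        # blocks themselves -> near-instant, near-zero extra space.
        return Volume(self.block_map)

    def write(self, blockno, new_block_id):
        # Copy-on-write: a write allocates a new block for this volume only;
        # the parent keeps pointing at the original block.
        self.block_map[blockno] = new_block_id


parent = Volume({n: f"blk-{n}" for n in range(4)})
clone = parent.clone()

# Before any write, parent and clone share every block: no extra space used.
shared = set(parent.block_map.values()) & set(clone.block_map.values())
print(len(shared))          # all 4 blocks shared

clone.write(0, "blk-new")   # modify one block on the clone
extra = set(clone.block_map.values()) - set(parent.block_map.values())
print(len(extra))           # only the 1 changed block consumes new space
print(parent.block_map[0])  # parent is untouched: still "blk-0"
```

This also matches the split behavior in the summary: splitting a clone would mean copying every still-shared block into blocks the clone owns outright, which is why a split, unlike clone creation, does consume time and space.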

Trying it out

Writing a test file

Let's actually try it.

The state of the volume, aggregate, and Storage Efficiency before creating the test file is as follows. To make changes in physical storage consumption easy to see, Storage Efficiency is disabled and the Tiering Policy is set to none.

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> version
NetApp Release 9.13.1P7D3: Wed Feb 14 13:11:46 UTC 2024

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 00:12:28 Sun Apr 07 01:38:49 2024 Sun Apr 07 01:38:49 2024 0B           0%              0B             308KB

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state    policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ -------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Disabled auto   false       false              efficient               false         false           true                              false                           false

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used  percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percentlogical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ ------------------------------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 121.6GB   128GB           121.6GB 308KB 0%           0B                 0%                         0B                  308KB         0%                    308KB        0%-                 308KB               none           0B                                  0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 136KB
                               Total Physical Used: 284KB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 136KB
Total Data Reduction Physical Used Without Snapshots: 284KB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 136KB
Total Data Reduction Physical Used without snapshots and flexclones: 284KB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 616KB
Total Physical Used in FabricPool Performance Tier: 4.80MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 616KB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.80MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 136KB
               Physical Space Used for All Volumes: 136KB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 284KB
              Physical Space Used by the Aggregate: 284KB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     860.6GB   861.8GB 1.12GB   46.98MB       0%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              1.12GB         0%
      Aggregate Metadata                             4.14MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    46.48GB         5%

      Total Physical Used                           46.98MB         0%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Write a 1GiB test file.

$ sudo mount -t nfs svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1
$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1 nfs4  122G  320K  122G   1% /mnt/fsxn/vol1

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/random_pattern_binary_block_1GiB bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.32884 s, 170 MB/s

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1 nfs4  122G  1.1G  121G   1% /mnt/fsxn/vol1

The state of the volume and aggregate after writing the test file is as follows. You can see that 1GiB is consumed both logically and physically.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 120.6GB   128GB           121.6GB 1.00GB 0%           0B                 0%                         0B                  1.00GB        1%                    1.00GB       1% -                 1.00GB              none           0B                                  0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 1.00GB
                               Total Physical Used: 1.00GB
                    Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 1.00GB
Total Data Reduction Physical Used Without Snapshots: 1.00GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones: 1.00GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 1.00GB
Total Physical Used in FabricPool Performance Tier: 1.01GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.01GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 1.00GB
               Physical Space Used for All Volumes: 1.00GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 1.00GB
              Physical Space Used by the Aggregate: 1.00GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 0B
             Physical Size Used by Snapshot Copies: 0B
              Snapshot Volume Data Reduction Ratio: 1.00:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     859.6GB   861.8GB 2.13GB   1.06GB        0%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              2.12GB         0%
      Aggregate Metadata                             5.43MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    47.48GB         5%

      Total Physical Used                            1.06GB         0%


      Total Provisioned Space                         129GB        14%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Creating a FlexClone volume

Now let's create a FlexClone volume.

For the creation procedure, the following NetApp official documentation is a useful reference.

::*> snapshot show -volume vol1
There are no entries matching your query.

::*> volume clone
    create           sharing-by-split show             split

::*> volume clone show
This table is currently empty.

::*> volume clone create -parent-volume vol1 -flexclone vol1_clone -junction-path /vol1_clone
[Job 44] Job succeeded: Successful

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW

::*> volume clone show -instance

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1
                 FlexClone Parent Snapshot: clone_vol1_clone.2024-04-07_020316.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1027
              FlexClone Master Data Set ID: 2163179381
                            FlexClone Size: 128GB
                                 Used Size: 1.00GB
                            Split Estimate: 1.01GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 1.00GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 0%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
                           FlexGroup Index: -
   Maximum size of a FlexGroup Constituent: -
                   Constituent Volume Role: -
           Is Active FlexGroup Constituent: true
                     Is Constituent Volume: false
                     Is Volume a FlexGroup: false
                     Extended Volume Style: flexvol
                          Type of Workflow: auto
                             SnapLock Type: non-snaplock

::*>
::*> volume clone sharing-by-split show
This table is currently empty.

It completed in about five seconds.

This time I created the clone volume in the same SVM, but checking the command reference for volume clone create, -parent-vserver can also be specified, so it looks like having the parent volume in a different SVM would not be a problem.
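Incidentally, the same operation should also be possible through the ONTAP REST API (POST /api/storage/volumes). The sketch below only builds the request payload; the field names follow my reading of the ONTAP REST API reference, and the endpoint and credentials are placeholders, so treat this as an unverified sketch and check it against your ONTAP version's documentation.

```python
import json

# Hypothetical REST equivalent of `volume clone create` (unverified sketch;
# field names per my reading of the ONTAP REST API reference).
payload = {
    "name": "vol1_clone",
    "svm": {"name": "svm"},
    "clone": {
        "is_flexclone": True,
        "parent_volume": {"name": "vol1"},
        # Omitting parent_snapshot lets ONTAP take a new Snapshot;
        # name an existing one here to clone from it instead.
    },
    "nas": {"path": "/vol1_clone"},  # junction path, as in the CLI example
}

print(json.dumps(payload, indent=2))

# Sending it would look roughly like this (placeholders, not executed here):
#   requests.post("https://<management-endpoint>/api/storage/volumes",
#                 json=payload, auth=("fsxadmin", "<password>"))
```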

Checking the Snapshot list, I could also confirm that a Snapshot had been taken.

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     164KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     220KB     0%    0%
2 entries were displayed.

The state of the volume and aggregate after creating the FlexClone volume is as follows. You can see that physical consumption barely changed, going from 1.06GB to 1.10GB. You can also see that the clone volume vol1_clone consumes only 1.27MB of physical data.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 120.6GB   128GB           121.6GB 1.00GB 0%           0B                 0%                         0B                  1.00GB        1%                    1.00GB       1% -                 1.00GB              none           0B                                  0%
svm     vol1_clone
               128GB 120.6GB   128GB           121.6GB 1.00GB 0%           0B                 0%                         0B                  1.27MB        0%                    1.00GB       1% -                 1.00GB              none           0B                                  0%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 6.03GB
                               Total Physical Used: 1.00GB
                    Total Storage Efficiency Ratio: 6.00:1
Total Data Reduction Logical Used Without Snapshots: 2.01GB
Total Data Reduction Physical Used Without Snapshots: 1.00GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones: 1.00GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 6.03GB
Total Physical Used in FabricPool Performance Tier: 1.01GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 5.94:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.01GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 2.01GB
               Physical Space Used for All Volumes: 2.01GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 1.00GB
              Physical Space Used by the Aggregate: 1.00GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 4.02GB
             Physical Size Used by Snapshot Copies: 852KB
              Snapshot Volume Data Reduction Ratio: 4945.92:1
            Logical Size Used by FlexClone Volumes: 1.00GB
          Physical Sized Used by FlexClone Volumes: 1.27MB
             FlexClone Volume Data Reduction Ratio: 810.30:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2447.64:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     859.6GB   861.8GB 2.17GB   1.10GB        0%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              2.17GB         0%
      Aggregate Metadata                             7.48MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    47.53GB         5%

      Total Physical Used                            1.10GB         0%


      Total Provisioned Space                         257GB        28%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

Writing to the FlexClone volume

Next, let's write to the FlexClone volume.

$ sudo mkdir -p /mnt/fsxn/vol1_clone
$ sudo mount -t nfs svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone /mnt/fsxn/vol1_clone
$ df -hT -t nfs4
Filesystem                                                                         Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1       nfs4  122G  1.1G  121G   1% /mnt/fsxn/vol1
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone nfs4  122G  1.1G  121G   1% /mnt/fsxn/vol1_clone

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1_clone/random_pattern_binary_block_2GiB bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 13.7145 s, 157 MB/s

$ df -hT -t nfs4
Filesystem                                                                         Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1       nfs4  122G  1.1G  121G   1% /mnt/fsxn/vol1

The state of the volume and aggregate after writing to the FlexClone volume is as follows. Physical consumption grew from 1.10GB to 3.12GB, an increase matching the 2GiB that was written. You can also see that the clone volume vol1_clone's physical data consumption grew from 1.27MB to 2.01GB.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 120.6GB   128GB           121.6GB 1.00GB 0%           0B                 0%                         0B                  1.00GB        1%                    1.00GB       1% -                 1.00GB              none           0B                                  0%
svm     vol1_clone
               128GB 118.6GB   128GB           121.6GB 3.01GB 2%           0B                 0%                         0B                  2.01GB        2%                    3.01GB       2% -                 3.01GB              none           0B                                  0%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 8.03GB
                               Total Physical Used: 3.01GB
                    Total Storage Efficiency Ratio: 2.67:1
Total Data Reduction Logical Used Without Snapshots: 4.02GB
Total Data Reduction Physical Used Without Snapshots: 3.01GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.33:1
Total Data Reduction Logical Used without snapshots and flexclones: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones: 1GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 8.04GB
Total Physical Used in FabricPool Performance Tier: 3.03GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.65:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.02GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 4.02GB
               Physical Space Used for All Volumes: 4.02GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 3.01GB
              Physical Space Used by the Aggregate: 3.01GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 4.02GB
             Physical Size Used by Snapshot Copies: 900KB
              Snapshot Volume Data Reduction Ratio: 4682.14:1
            Logical Size Used by FlexClone Volumes: 3.01GB
          Physical Sized Used by FlexClone Volumes: 2.01GB
             FlexClone Volume Data Reduction Ratio: 1.50:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.49:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     857.6GB   861.8GB 4.19GB   3.12GB        0%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              4.18GB         0%
      Aggregate Metadata                            13.55MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    49.54GB         5%

      Total Physical Used                            3.12GB         0%


      Total Provisioned Space                         257GB        28%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW

::*> volume clone show -instance

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1
                 FlexClone Parent Snapshot: clone_vol1_clone.2024-04-07_020316.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1027
              FlexClone Master Data Set ID: 2163179381
                            FlexClone Size: 128GB
                                 Used Size: 3.01GB
                            Split Estimate: 1.00GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 1.00GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 2%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
                           FlexGroup Index: -
   Maximum size of a FlexGroup Constituent: -
                   Constituent Volume Role: -
           Is Active FlexGroup Constituent: true
                     Is Constituent Volume: false
                     Is Volume a FlexGroup: false
                     Extended Volume Style: flexvol
                          Type of Workflow: auto
                             SnapLock Type: non-snaplock
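
なお、上記出力の Split Estimate は、クローンをスプリット（クローン元ボリュームとの依存関係の切り離し）した場合に追加で必要となる物理容量の見積もりです。スプリットしたい場合は、以下のような流れのコマンドになるはずです（本記事のこの時点では実行していません。あくまでONTAP CLIの一般的な構文に基づく例です）。

```
# スプリットに必要な容量を事前に見積もる
::*> volume clone split estimate -vserver svm -flexclone vol1_clone

# スプリットを開始する（バックグラウンドで物理コピーが実行される）
::*> volume clone split start -vserver svm -flexclone vol1_clone

# スプリットの進捗を確認する
::*> volume clone split show
```

スプリットが完了すると、クローンボリュームは共有データブロックを持たない通常のFlexVolとなり、クローン元ボリュームを削除できるようになります。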

FlexClone元のボリュームへの書き込み

FlexClone元のボリュームへの書き込みも試します。

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/random_pattern_binary_block_3GiB bs=1M count=3072
3072+0 records in
3072+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 20.8923 s, 154 MB/s

$ df -hT -t nfs4
Filesystem                                                                         Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1       nfs4  122G  4.1G  118G   4% /mnt/fsxn/vol1

FlexClone元ボリュームへの書き込み後のボリューム、aggregateの状態は以下のとおりです。クローン元に書き込んでも、FlexCloneボリュームの物理ストレージ消費量には変動がないことが分かります。

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 117.6GB   128GB           121.6GB 4.02GB 3%           0B                 0%                         0B                  4.02GB        3%                    4.02GB       3% -                 4.02GB              none           0B                                  0%
svm     vol1_clone
               128GB 118.6GB   128GB           121.6GB 3.01GB 2%           0B                 0%                         0B                  2.01GB        2%                    3.01GB       2% -                 3.01GB              none           0B                                  0%
2 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 11.05GB
                               Total Physical Used: 6.02GB
                    Total Storage Efficiency Ratio: 1.83:1
Total Data Reduction Logical Used Without Snapshots: 7.03GB
Total Data Reduction Physical Used Without Snapshots: 6.02GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.17:1
Total Data Reduction Logical Used without snapshots and flexclones: 4.01GB
Total Data Reduction Physical Used without snapshots and flexclones: 4.01GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 11.05GB
Total Physical Used in FabricPool Performance Tier: 6.05GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.83:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.04GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 7.03GB
               Physical Space Used for All Volumes: 7.03GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 6.02GB
              Physical Space Used by the Aggregate: 6.02GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 4.02GB
             Physical Size Used by Snapshot Copies: 928KB
              Snapshot Volume Data Reduction Ratio: 4540.87:1
            Logical Size Used by FlexClone Volumes: 3.01GB
          Physical Sized Used by FlexClone Volumes: 2.01GB
             FlexClone Volume Data Reduction Ratio: 1.50:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.49:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 2

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     854.6GB   861.8GB 7.21GB   6.16GB        1%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              7.19GB         1%
      Aggregate Metadata                            15.78MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    52.56GB         6%

      Total Physical Used                            6.16GB         1%


      Total Provisioned Space                         257GB        28%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW

::*> volume clone show -instance

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1
                 FlexClone Parent Snapshot: clone_vol1_clone.2024-04-07_020316.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1027
              FlexClone Master Data Set ID: 2163179381
                            FlexClone Size: 128GB
                                 Used Size: 3.01GB
                            Split Estimate: 1.00GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 1.00GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 2%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
                           FlexGroup Index: -
   Maximum size of a FlexGroup Constituent: -
                   Constituent Volume Role: -
           Is Active FlexGroup Constituent: true
                     Is Constituent Volume: false
                     Is Volume a FlexGroup: false
                     Extended Volume Style: flexvol
                          Type of Workflow: auto
                             SnapLock Type: non-snaplock

FlexCloneのネスト

次に、FlexCloneのネストを試してみます。

FlexCloneボリュームからさらにFlexCloneボリュームを作成します。

::*> volume clone create -parent-volume vol1_clone -flexclone vol1_clone_clone -junction-path /vol1_clone_clone
[Job 48] Job succeeded: Successful
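
今回は -parent-snapshot を指定していないため、クローン作成用のSnapshotが自動で新規作成されています。既存のSnapshotを基にクローンを作成したい場合は、以下のように -parent-snapshot を指定します（ボリューム名・Snapshot名は例であり、ここでは実行していません）。

```
# 既存のSnapshot (hourly.2024-04-07_0205) からクローンボリュームを作成する例
::*> volume clone create -vserver svm -parent-volume vol1_clone -flexclone vol1_clone_clone2 -parent-snapshot hourly.2024-04-07_0205 -junction-path /vol1_clone_clone2
```

定期的に本番データをリフレッシュしたい用途であれば、SnapMirrorやSnapshotポリシーで取得済みのSnapshotを指定することで、任意の時点のデータからクローンを切り出せます。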

FlexCloneボリュームのネスト後のボリューム、aggregateの状態は以下のとおりです。問題なくFlexCloneボリュームが作成されています。

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 117.6GB   128GB           121.6GB 4.02GB 3%           0B                 0%                         0B                  4.02GB        3%                    4.02GB       3% -                 4.02GB              none           0B                                  0%
svm     vol1_clone
               128GB 118.6GB   128GB           121.6GB 3.01GB 2%           0B                 0%                         0B                  2.01GB        2%                    3.01GB       2% -                 3.01GB              none           0B                                  0%
svm     vol1_clone_clone
               128GB 118.6GB   128GB           121.6GB 3.01GB 2%           0B                 0%                         0B                  1.13MB        0%                    3.01GB       2% -                 3.01GB              none           0B                                  0%
3 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 20.08GB
                               Total Physical Used: 6.02GB
                    Total Storage Efficiency Ratio: 3.33:1
Total Data Reduction Logical Used Without Snapshots: 10.04GB
Total Data Reduction Physical Used Without Snapshots: 6.02GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.67:1
Total Data Reduction Logical Used without snapshots and flexclones: 4.01GB
Total Data Reduction Physical Used without snapshots and flexclones: 4.01GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 20.09GB
Total Physical Used in FabricPool Performance Tier: 6.05GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 3.32:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.04GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 10.04GB
               Physical Space Used for All Volumes: 10.04GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 6.02GB
              Physical Space Used by the Aggregate: 6.02GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 10.04GB
             Physical Size Used by Snapshot Copies: 1.41MB
              Snapshot Volume Data Reduction Ratio: 7274.14:1
            Logical Size Used by FlexClone Volumes: 6.03GB
          Physical Sized Used by FlexClone Volumes: 2.01GB
             FlexClone Volume Data Reduction Ratio: 2.99:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 7.98:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 3

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     854.5GB   861.8GB 7.25GB   6.17GB        1%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                              7.23GB         1%
      Aggregate Metadata                            16.35MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    52.61GB         6%

      Total Physical Used                            6.17GB         1%


      Total Provisioned Space                         385GB        42%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW
        vol1_clone_clone
                      svm     vol1_clone    clone_vol1_clone_clone.2024-04-07_021551.0
                                                                 online    RW
2 entries were displayed.

::*> volume clone show -instance

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1
                 FlexClone Parent Snapshot: clone_vol1_clone.2024-04-07_020316.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1027
              FlexClone Master Data Set ID: 2163179381
                            FlexClone Size: 128GB
                                 Used Size: 3.01GB
                            Split Estimate: 1.00GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 1.00GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 2%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
                           FlexGroup Index: -
   Maximum size of a FlexGroup Constituent: -
                   Constituent Volume Role: -
           Is Active FlexGroup Constituent: true
                     Is Constituent Volume: false
                     Is Volume a FlexGroup: false
                     Extended Volume Style: flexvol
                          Type of Workflow: auto
                             SnapLock Type: non-snaplock

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1_clone
                 FlexClone Parent Snapshot: clone_vol1_clone_clone.2024-04-07_021551.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1028
              FlexClone Master Data Set ID: 2163179382
                            FlexClone Size: 128GB
                                 Used Size: 3.01GB
                            Split Estimate: 3.02GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 3.01GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 2%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
2 entries were displayed.

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     168KB     0%    0%
                  hourly.2024-04-07_0205                   208KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     224KB     0%    0%
                  hourly.2024-04-07_0205                   200KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           172KB     0%    0%
         vol1_clone_clone
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           316KB     0%    0%
6 entries were displayed.

ネストしたFlexCloneボリュームにデータを追加書き込みします。

$ sudo mkdir -p /mnt/fsxn/vol1_clone_clone
$ sudo mount -t nfs svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone_clone /mnt/fsxn/vol1_clone_clone
$ df -hT -t nfs4
Filesystem                                                                               Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1             nfs4  122G  4.1G  118G   4% /mnt/fsxn/vol1
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone       nfs4  122G  3.1G  119G   3% /mnt/fsxn/vol1_clone
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone_clone nfs4  122G  3.1G  119G   3% /mnt/fsxn/vol1_clone_clone

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1_clone_clone/random_pattern_binary_block_5GiB bs=1M count=5120
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 35.1638 s, 153 MB/s

$ df -hT -t nfs4
Filesystem                                                                               Type  Size  Used Avail Use% Mounted on
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1             nfs4  122G  4.1G  118G   4% /mnt/fsxn/vol1
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone       nfs4  122G  3.1G  119G   3% /mnt/fsxn/vol1_clone
svm-0365ba78d7ad91348.fs-009351b227391d1f1.fsx.us-east-1.amazonaws.com:/vol1_clone_clone nfs4  122G  8.1G  114G   7% /mnt/fsxn/vol1_clone_clone

$ ls -lR /mnt/fsxn/vol1*
/mnt/fsxn/vol1:
total 4210836
-rw-r--r--. 1 root root 1073741824 Apr  7 01:53 random_pattern_binary_block_1GiB
-rw-r--r--. 1 root root 3221225472 Apr  7 02:12 random_pattern_binary_block_3GiB

/mnt/fsxn/vol1_clone:
total 3158132
-rw-r--r--. 1 root root 1073741824 Apr  7 01:53 random_pattern_binary_block_1GiB
-rw-r--r--. 1 root root 2147483648 Apr  7 02:09 random_pattern_binary_block_2GiB

/mnt/fsxn/vol1_clone_clone:
total 8421664
-rw-r--r--. 1 root root 1073741824 Apr  7 01:53 random_pattern_binary_block_1GiB
-rw-r--r--. 1 root root 2147483648 Apr  7 02:09 random_pattern_binary_block_2GiB
-rw-r--r--. 1 root root 5368709120 Apr  7 02:19 random_pattern_binary_block_5GiB

ボリューム、aggregateの状態は以下のとおりです。書き込まれたデータサイズ分だけ物理消費量が増加していることが分かります。

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 117.6GB   128GB           121.6GB 4.02GB 3%           0B                 0%                         0B                  4.02GB        3%                    4.02GB       3% -                 4.02GB              none           0B                                  0%
svm     vol1_clone
               128GB 118.6GB   128GB           121.6GB 3.01GB 2%           0B                 0%                         0B                  2.01GB        2%                    3.01GB       2% -                 3.01GB              none           0B                                  0%
svm     vol1_clone_clone
               128GB 113.6GB   128GB           121.6GB 8.03GB 6%           0B                 0%                         0B                  5.03GB        4%                    8.03GB       7% -                 8.03GB              none           0B                                  0%
3 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 25.10GB
                               Total Physical Used: 11.04GB
                    Total Storage Efficiency Ratio: 2.27:1
Total Data Reduction Logical Used Without Snapshots: 15.06GB
Total Data Reduction Physical Used Without Snapshots: 11.04GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.36:1
Total Data Reduction Logical Used without snapshots and flexclones: 4.01GB
Total Data Reduction Physical Used without snapshots and flexclones: 4.00GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 25.11GB
Total Physical Used in FabricPool Performance Tier: 11.09GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.26:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.05GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 15.06GB
               Physical Space Used for All Volumes: 15.06GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 11.04GB
              Physical Space Used by the Aggregate: 11.04GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 10.04GB
             Physical Size Used by Snapshot Copies: 1.46MB
              Snapshot Volume Data Reduction Ratio: 7059.62:1
            Logical Size Used by FlexClone Volumes: 11.05GB
          Physical Sized Used by FlexClone Volumes: 7.04GB
             FlexClone Volume Data Reduction Ratio: 1.57:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.00:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 3

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     849.5GB   861.8GB 12.29GB  11.23GB       1%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             12.26GB         1%
      Aggregate Metadata                            30.16MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    57.64GB         6%

      Total Physical Used                           11.23GB         1%


      Total Provisioned Space                         385GB        42%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW
        vol1_clone_clone
                      svm     vol1_clone    clone_vol1_clone_clone.2024-04-07_021551.0
                                                                 online    RW
2 entries were displayed.

::*> volume clone show -instance

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1
                 FlexClone Parent Snapshot: clone_vol1_clone.2024-04-07_020316.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1027
              FlexClone Master Data Set ID: 2163179381
                            FlexClone Size: 128GB
                                 Used Size: 3.01GB
                            Split Estimate: 1.00GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 1.00GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 2%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
                           FlexGroup Index: -
   Maximum size of a FlexGroup Constituent: -
                   Constituent Volume Role: -
           Is Active FlexGroup Constituent: true
                     Is Constituent Volume: false
                     Is Volume a FlexGroup: false
                     Extended Volume Style: flexvol
                          Type of Workflow: auto
                             SnapLock Type: non-snaplock

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1_clone
                 FlexClone Parent Snapshot: clone_vol1_clone_clone.2024-04-07_021551.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1028
              FlexClone Master Data Set ID: 2163179382
                            FlexClone Size: 128GB
                                 Used Size: 8.03GB
                            Split Estimate: 3.01GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 3.01GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 6%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
Press <space> to page down, <return> for next line, or 'q' to quit...
2 entries were displayed.

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     168KB     0%    0%
                  hourly.2024-04-07_0205                   208KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     224KB     0%    0%
                  hourly.2024-04-07_0205                   200KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           184KB     0%    0%
         vol1_clone_clone
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           348KB     0%    0%
6 entries were displayed.

Splitting a FlexClone

To delete the parent volume of a FlexClone, you first need to split the FlexClone from it.

As a test, let's split the FlexClone volume vol1_clone from vol1.

Splitting is expected to increase physical storage consumption. Run volume clone split estimate beforehand to check how much consumption will grow and whether there is enough free space.

::*> volume clone split estimate
                             Split
Vserver   FlexClone       Estimate
--------- ------------- ----------
svm       vol1_clone        1.00GB
          vol1_clone_clone  3.01GB
2 entries were displayed.

Points to note are as follows.

  • New Snapshot copies of a FlexClone volume cannot be created while the split operation is in progress.
  • A FlexClone volume that belongs to a data protection relationship or a load-sharing mirror cannot be split from its parent volume.
  • If you take the FlexClone volume offline while the split is running, the split operation is suspended; it resumes when the volume is brought back online.
  • After the split, both the parent FlexVol volume and the clone require the full space allocation determined by their respective volume guarantees.
  • Once a FlexClone volume has been split from its parent, the two cannot be rejoined.
  • Starting with ONTAP 9.4, if the volume guarantee on an AFF system is none, the FlexClone split operation shares physical blocks and does not copy the data. As a result, on ONTAP 9.4 and later, splitting FlexClone volumes on AFF systems is faster than the FlexClone split operation on other FAS systems. The improved FlexClone split operation on AFF systems has the following benefits:
    • Storage efficiency is preserved after the clone is split from its parent.
    • Existing Snapshot copies are not deleted.
    • The operation completes faster.
    • A FlexClone volume can be split from any point in the clone hierarchy.

Split a FlexClone volume from its parent volume

Let's try it.

::*> volume clone split start -flexclone vol1_clone

Warning: Are you sure you want to split clone volume vol1_clone in Vserver svm ? {y|n}: y
[Job 49] Job is queued: Split vol1_clone.

::*> volume clone split show
This table is currently empty.

::*> volume show -volume vol1* -fields clone-volume
vserver volume clone-volume
------- ------ ------------
svm     vol1   false
svm     vol1_clone
               true
svm     vol1_clone_clone
               true
3 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW
        vol1_clone_clone
                      svm     vol1_clone    clone_vol1_clone_clone.2024-04-07_021551.0
                                                                 online    RW
2 entries were displayed.

::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -application ssh -state Error|Success
timestamp                  node                      application vserver                username input         state message
-------------------------- ------------------------- ----------- ---------------------- -------- ------------- ----- ---------------------------------------------
"Sun Apr 07 01:50:34 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin Login Attempt Error Your privileges has changed since last login.
"Sun Apr 07 01:50:34 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin Logging in    Success
                                                                                                                     -
"Sun Apr 07 01:51:10 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin Question: Warning: These diagnostic command... : y
                                                                                                               Success
                                                                                                                     -
"Sun Apr 07 01:51:10 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin set diag      Success
                                                                                                                     -
"Sun Apr 07 02:03:21 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin volume clone create -parent-volume vol1 -flexclone vol1_clone -junction-path /vol1_clone
                                                                                                               Success
                                                                                                                     -
"Sun Apr 07 02:15:56 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin volume clone create -parent-volume vol1_clone -flexclone vol1_clone_clone -junction-path /vol1_clone_clone
                                                                                                               Success
                                                                                                                     -
"Sun Apr 07 02:28:07 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin Question: Are you sure you want to split cl... : y
                                                                                                               Success
                                                                                                                     -
"Sun Apr 07 02:28:07 2024" FsxId009351b227391d1f1-01 ssh         FsxId009351b227391d1f1 fsxadmin volume clone split start -flexclone vol1_clone
                                                                                                               Success
                                                                                                                     -
8 entries were displayed.

The command was accepted successfully, but no matter how long I waited, the split never ran. Perhaps, when FlexClones are nested, an intermediate clone volume cannot be split.

Let's split vol1_clone_clone from vol1_clone.

::*> volume clone split start -flexclone vol1_clone_clone

Warning: Are you sure you want to split clone volume vol1_clone_clone in Vserver svm ? {y|n}: y
[Job 51] Job is queued: Split vol1_clone_clone.

::*> volume clone split show
                                Blocks
                         ------------------
Vserver   FlexClone       Scanned  Updated     % Complete
--------- -------------  -------- --------      --------
svm       vol1_clone_clone
                           201127   198786             9

::*> volume clone split show -instance

       Vserver Name: svm
   FlexClone Volume: vol1_clone_clone
Percentage Complete: 21
     Blocks Scanned: 458687
     Blocks Updated: 456346

::*> volume clone split show
                                Blocks
                         ------------------
Vserver   FlexClone       Scanned  Updated     % Complete
--------- -------------  -------- --------      --------
svm       vol1_clone_clone
                           651995   649654            30

::*> volume clone split show
                                Blocks
                         ------------------
Vserver   FlexClone       Scanned  Updated     % Complete
--------- -------------  -------- --------      --------
svm       vol1_clone_clone
                          2106795   786661            99

::*> volume clone split show
This table is currently empty.

::*> volume show -volume vol1* -fields clone-volume
vserver volume clone-volume
------- ------ ------------
svm     vol1   false
svm     vol1_clone
               true
svm     vol1_clone_clone
               false
3 entries were displayed.

This time the split completed. As suspected, when FlexClones are nested, an intermediate clone volume apparently cannot be split.
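If that is the case, tearing down a nested clone chain presumably means splitting from the innermost clone outward. A sketch using this article's volume names (run the second command only after the first split completes; splitting vol1_clone in this order was not actually tested here):

```
::*> volume clone split start -flexclone vol1_clone_clone
::*> volume clone split start -flexclone vol1_clone
```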

The state of the volumes, the aggregate, and the Snapshots after the split is as follows. You can see that the physical data consumption of vol1_clone_clone increased from 5.03GB to 8.03GB, exactly the amount reported by volume clone split estimate.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent, tiering-policy
vserver volume size  available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs tiering-policy performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- -------------- ----------------------------------- -------------------------------------------
svm     vol1   128GB 117.6GB   128GB           121.6GB 4.02GB 3%           0B                 0%                         0B                  4.02GB        3%                    4.02GB       3% -                 4.02GB              none           0B                                  0%
svm     vol1_clone
               128GB 118.6GB   128GB           121.6GB 3.01GB 2%           0B                 0%                         0B                  2.01GB        2%                    3.01GB       2% -                 3.01GB              none           0B                                  0%
svm     vol1_clone_clone
               128GB 113.6GB   128GB           121.6GB 8.03GB 6%           0B                 0%                         0B                  8.03GB        6%                    8.03GB       7% -                 8.03GB              none           0B                                  0%
3 entries were displayed.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId009351b227391d1f1-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 22.09GB
                               Total Physical Used: 14.06GB
                    Total Storage Efficiency Ratio: 1.57:1
Total Data Reduction Logical Used Without Snapshots: 15.06GB
Total Data Reduction Physical Used Without Snapshots: 14.05GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.07:1
Total Data Reduction Logical Used without snapshots and flexclones: 12.05GB
Total Data Reduction Physical Used without snapshots and flexclones: 12.04GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 22.10GB
Total Physical Used in FabricPool Performance Tier: 14.11GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.57:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 12.05GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 12.10GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
                Logical Space Used for All Volumes: 15.06GB
               Physical Space Used for All Volumes: 15.06GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 14.06GB
              Physical Space Used by the Aggregate: 14.06GB
           Space Saved by Aggregate Data Reduction: 0B
                 Aggregate Data Reduction SE Ratio: 1.00:1
              Logical Size Used by Snapshot Copies: 7.03GB
             Physical Size Used by Snapshot Copies: 1.12MB
              Snapshot Volume Data Reduction Ratio: 6445.31:1
            Logical Size Used by FlexClone Volumes: 3.01GB
          Physical Sized Used by FlexClone Volumes: 2.01GB
             FlexClone Volume Data Reduction Ratio: 1.50:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 4.99:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 1
         Number of SIS Change Log Disabled Volumes: 3

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     846.4GB   861.8GB 15.38GB  14.33GB       2%                    0B                          0%                                  0B                   0B                           0B              0%                 0B               -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Volume Footprints                             15.36GB         2%
      Aggregate Metadata                            25.44MB         0%
      Snapshot Reserve                              45.36GB         5%
      Total Used                                    60.74GB         7%

      Total Physical Used                           14.33GB         2%


      Total Provisioned Space                         385GB        42%


      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore
      Feature                                          Used      Used%
      --------------------------------           ----------     ------
      Logical Used                                       0B          -
      Logical Referenced Capacity                        0B          -
      Logical Unreferenced Capacity                      0B          -

      Total Physical Used                                0B          -



2 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW

::*> volume clone show -instance

                              Vserver Name: svm
                          FlexClone Volume: vol1_clone
                            FlexClone Type: RW
                  FlexClone Parent Vserver: svm
                   FlexClone Parent Volume: vol1
                 FlexClone Parent Snapshot: clone_vol1_clone.2024-04-07_020316.0
                    FlexClone Volume State: online
                             Junction Path: /vol1_clone
                           Junction Active: true
                     Space Guarantee Style: none
                 Space Guarantee In Effect: true
                       FlexClone Aggregate: aggr1
                     FlexClone Data Set ID: 1027
              FlexClone Master Data Set ID: 2163179381
                            FlexClone Size: 128GB
                                 Used Size: 3.01GB
                            Split Estimate: 1GB
                            Blocks Scanned: -
                            Blocks Updated: -
                                   Comment:
                     QoS Policy Group Name: -
            QoS Adaptive Policy Group Name: -
                       Caching Policy Name: -
                        Parent volume type: READ_WRITE
Inherited Physical Used from Base Snapshot: 1.00GB
      Inherited Savings from Base Snapshot: 0B
                 FlexClone Used Percentage: 2%
                     Vserver DR Protection: -
                       Percentage Complete: -
                          Volume-Level UID: -
                          Volume-Level GID: -
                     UUID of the FlexGroup: -
              FlexGroup Master Data Set ID: -
                           FlexGroup Index: -
   Maximum size of a FlexGroup Constituent: -
                   Constituent Volume Role: -
           Is Active FlexGroup Constituent: true
                     Is Constituent Volume: false
                     Is Volume a FlexGroup: false
                     Extended Volume Style: flexvol
                          Type of Workflow: auto
                             SnapLock Type: non-snaplock

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     168KB     0%    0%
                  hourly.2024-04-07_0205                   208KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     224KB     0%    0%
                  hourly.2024-04-07_0205                   200KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           184KB     0%    0%
5 entries were displayed.

Checking how SnapMirror and FlexClone interact

Running SnapMirror with a FlexClone parent volume as the source

While browsing the official NetApp documentation, I found a note saying that creating a FlexClone from a SnapMirror source volume can prevent SnapMirror transfers from completing successfully.

You can create a FlexClone volume from the source or destination volume of an existing volume SnapMirror relationship. However, doing so can prevent subsequent SnapMirror replication operations from completing successfully.

Replication might not work because creating the FlexClone volume can lock a Snapshot copy that is in use by SnapMirror. In that case, SnapMirror stops replicating to the destination volume until the FlexClone volume is destroyed or split from its parent. You have two options for addressing this issue:

  • If you need the FlexClone volume only temporarily and can accept a temporary stoppage of SnapMirror replication, create the FlexClone volume and delete it or split it from its parent as soon as possible.
  • SnapMirror replication continues normally once the FlexClone volume is deleted or split from its parent.
  • If you cannot accept a temporary stoppage of SnapMirror replication, create a Snapshot copy in the SnapMirror source volume, then create the FlexClone volume from that Snapshot copy. (If you are creating the FlexClone volume from the destination volume, you must wait until that Snapshot copy has been replicated to the SnapMirror destination volume.)

This method of creating a Snapshot copy in the SnapMirror source volume lets you create the clone without locking a Snapshot copy that is in use by SnapMirror.

Considerations for creating FlexClone volumes from SnapMirror source or destination volumes
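Applying the second workaround above to this article's setup would look something like the following. This is an untested sketch; the snapshot name clone_base is hypothetical, and the point is that the clone is created from an explicitly created Snapshot rather than one SnapMirror depends on:

```
::*> snapshot create -vserver svm -volume vol1 -snapshot clone_base
::*> volume clone create -vserver svm -flexclone vol1_clone -parent-volume vol1 -parent-snapshot clone_base -junction-path /vol1_clone
```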

Let's try SnapMirror with the FlexClone parent volume as the source.

::*> snapmirror protect -path-list svm:vol1 -destination-vserver svm -policy MirrorAllSnapshots -auto-initialize true
[Job 52] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1".

::*> snapmirror show
This table is currently empty.

::*> job show 52
                            Owning
Job ID Name                 Vserver    Node           State
------ -------------------- ---------- -------------- ----------
52     SnapMirror protect   svm        FsxId009351b227391d1f1-02
                                                      Failure
       Description: snapmirror protect for list of source endpoints beginning with "svm:vol1"

::*> job show 52 -instance

                      Job ID: 52
              Owning Vserver: svm
                        Name: SnapMirror protect
                 Description: snapmirror protect for list of source endpoints beginning with "svm:vol1"
                    Priority: Medium
                        Node: FsxId009351b227391d1f1-02
                    Affinity: Cluster
                    Schedule: @now
                  Queue Time: 04/07 05:57:23
                  Start Time: 04/07 05:57:26
                    End Time: 04/07 05:57:26
              Drop-dead Time: -
                  Restarted?: false
                       State: Failure
                 Status Code: 1
           Completion String: Out of 1 endpoints, 0 endpoints have been protected and protection has failed for 1 endpoints.
                        List of paths that failed protection and reasons for failure:
                        svm:vol1 : Failed to identify aggregate on which to create volume "vol1_dst". (No eligible aggregate to place the volume. Make sure that a non-root, non-taken-over, non-snaplock, non-composite aggregate with enough free space exists at the destination cluster. If aggr-list is configured for the Vserver make sure that an eligible aggregate is present in that list. Verify that the number of volumes is less than the supported maximum on the nodes in the destination cluster.)
                    Job Type: SnapMirror-ProtectV1
                Job Category: SnapMirror
                        UUID: b3ece0ff-f4a3-11ee-8e23-0fe9d475982c
          Execution Progress: Complete: Out of 1 endpoints, 0 endpoints have been protected and protection has failed for 1 endpoints.
                        List of paths that failed protection and reasons for failure:
                        svm:vol1 : Failed to identify aggregate on which to create volume "vol1_dst". (No eligible aggregate to place the volume. Make sure that a non-root, non-taken-over, non-snaplock, non-composite aggregate with enough free space exists at the destination cluster. If aggr-list is configured for the Vserver make sure that an eligible aggregate is present in that list. Verify that the number of volumes is less than the supported maximum on the nodes in the destination cluster.)
 [1]
                   User Name: fsxadmin
                     Process: mgwd
  Restart Is or Was Delayed?: false
Restart Is Delayed by Module: -

Creating the SnapMirror relationship with snapmirror protect failed. The error says no eligible aggregate could be identified for the destination volume, which is puzzling because the same command succeeded in other tests I have run.

Instead, I will manually create the destination volume, create the SnapMirror relationship, and initialize it.

::*> volume create -vserver svm -volume vol1_dst -aggregate aggr1 -state online -type DP -size 4GB -tiering-policy none -autosize-mode grow
[Job 54] Job succeeded: Successful

::*> snapmirror create -source-path svm:vol1 -destination-path svm:vol1_dst -policy MirrorAllSnapshots
Operation succeeded: snapmirror create for the relationship with destination "svm:vol1_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Uninitialized
                                      Idle           -         true    -

::*> snapmirror initialize -destination-path svm:vol1_dst
Operation is queued: snapmirror initialize of destination "svm:vol1_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Uninitialized
                                      Transferring   0B        true    04/07 06:09:55

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Uninitialized
                                      Finalizing     1.02GB    true    04/07 06:10:05

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Snapmirrored
                                      Transferring   0B        true    04/07 06:10:13

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Snapmirrored
                                      Transferring   1.90GB    true    04/07 06:10:21

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Snapmirrored
                                      Finalizing     1.90GB    true    04/07 06:10:21

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Snapmirrored
                                      Finalizing     3.07GB    true    04/07 06:10:36

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Snapmirrored
                                      Idle           -         true    -

The transfer completed without any errors.

Let's also check the state of the FlexClone volumes and Snapshots. No errors here either.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_060955
                                                           140KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     384KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_060955
                                                           172KB     0%    0%
6 entries were displayed.

Let's also try an incremental update.

::*> snapmirror update -destination-path svm:vol1_dst
Operation is queued: snapmirror update of destination "svm:vol1_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm:vol1_dst Snapmirrored
                                      Idle           -         true    -

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061310
                                                           148KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     384KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_060955
                                                           320KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061310
                                                           156KB     0%    0%
7 entries were displayed.

The transfer completes without any issues.

I then ran incremental updates repeatedly, and all of them succeeded.

::*> snapmirror show-history

Destination Source                Start       End
Path        Path        Operation Time        Time        Result
----------- ----------- --------- ----------- ----------- -------
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:14:14
                                              4/7/2024 06:14:16
                                                          success
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:13:10
                                              4/7/2024 06:13:12
                                                          success
svm:vol1_dst
            svm:vol1    scheduled-update
                                  4/7/2024 06:09:55
                                              4/7/2024 06:10:46
                                                          success
svm:vol1_dst
            svm:vol1    initialize
                                  4/7/2024 06:09:55
                                              4/7/2024 06:10:13
                                                          success
svm:vol1_dst
            svm:vol1    create    4/7/2024 06:09:18
                                              4/7/2024 06:09:18
                                                          success
5 entries were displayed.

So running SnapMirror with a FlexClone parent volume as the source appears to work without problems.

Creating a FlexClone after creating the SnapMirror relationship

In the previous pattern, we created a clone volume with FlexClone first, and then ran SnapMirror with the clone's parent volume as the source.

Now, if we create a FlexClone after the SnapMirror relationship has been created, will SnapMirror transfers still work correctly?

Let's create a FlexClone volume with vol1 as its parent.

::*> volume clone create -parent-volume vol1 -flexclone vol1_clone2 -junction-path /vol1_clone2
[Job 56] Job succeeded: Successful

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061414
                                                           148KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    140KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0    996KB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061310
                                                           280KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061414
                                                           156KB     0%    0%
9 entries were displayed.

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW
        vol1_clone2   svm     vol1          clone_vol1_clone2.2024-04-07_061613.0
                                                                 online    RW
2 entries were displayed.

Let's run an incremental SnapMirror transfer.

::*> snapmirror update -destination-path svm:vol1_dst
Operation is queued: snapmirror update of destination "svm:vol1_dst".

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    144KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           148KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0   1.09MB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061414
                                                           232KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           156KB     0%    0%
10 entries were displayed.

::*> snapmirror show-history

Destination Source                Start       End
Path        Path        Operation Time        Time        Result
----------- ----------- --------- ----------- ----------- -------
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:17:06
                                              4/7/2024 06:17:10
                                                          success
.
.
(remainder omitted)
.
.

The transfer completed without problems.

Creating a FlexClone after creating the SnapMirror relationship (specifying an sm_created Snapshot)

Next, what happens if the Snapshot used for the FlexClone is one created by SnapMirror (sm_created)? Would the sm_created Snapshot get locked and prevent transfers from completing normally?

Let's create a FlexClone volume based on an sm_created Snapshot of vol1.

::*> volume clone create -parent-volume vol1 -flexclone vol1_clone3 -junction-path /vol1_clone3 -parent-snapshot snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
[Job 57] Job succeeded: Successful

::*> volume clone show
                      Parent  Parent        Parent
Vserver FlexClone     Vserver Volume        Snapshot             State     Type
------- ------------- ------- ------------- -------------------- --------- ----
svm     vol1_clone    svm     vol1          clone_vol1_clone.2024-04-07_020316.0
                                                                 online    RW
        vol1_clone2   svm     vol1          clone_vol1_clone2.2024-04-07_061613.0
                                                                 online    RW
        vol1_clone3   svm     vol1          snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                                 online    RW
3 entries were displayed.

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    144KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           148KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0   1.09MB     0%    0%
         vol1_clone3
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                          1.09MB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061414
                                                           232KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           156KB     0%    0%
11 entries were displayed.

The FlexClone volume was created without problems.

Let's run an incremental SnapMirror transfer.

::*> snapmirror update -destination-path svm:vol1_dst
Operation is queued: snapmirror update of destination "svm:vol1_dst".

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    144KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           148KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061924
                                                           140KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0   1.09MB     0%    0%
         vol1_clone3
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                          1.12MB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           236KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061924
                                                           156KB     0%    0%
12 entries were displayed.

::*> snapmirror show-history -max-rows-per-relationship 1
Destination Source                Start       End
Path        Path        Operation Time        Time        Result
----------- ----------- --------- ----------- ----------- -------
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:19:24
                                              4/7/2024 06:19:26
                                                          success

No errors occurred.

Let's repeat the incremental transfers and see what happens to snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706, the Snapshot that was used when creating the FlexClone volume.

::*> snapmirror update -destination-path svm:vol1_dst
Operation is queued: snapmirror update of destination "svm:vol1_dst".

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    144KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           148KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062014
                                                           148KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0   1.09MB     0%    0%
         vol1_clone3
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                          1.12MB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           236KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061924
                                                           232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062014
                                                           156KB     0%    0%
13 entries were displayed.

::*> snapmirror show-history -max-rows-per-relationship 1

Destination Source                Start       End
Path        Path        Operation Time        Time        Result
----------- ----------- --------- ----------- ----------- -------
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:20:14
                                              4/7/2024 06:20:16
                                                          success
::*> snapmirror update -destination-path svm:vol1_dst
Operation is queued: snapmirror update of destination "svm:vol1_dst".

::*> snapmirror show-history -max-rows-per-relationship 1

Destination Source                Start       End
Path        Path        Operation Time        Time        Result
----------- ----------- --------- ----------- ----------- -------
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:21:17
                                              4/7/2024 06:21:19
                                                          success

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    144KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           148KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062117
                                                           148KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0   1.09MB     0%    0%
         vol1_clone3
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                          1.12MB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           236KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062014
                                                           236KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062117
                                                           156KB     0%    0%
13 entries were displayed.

::*> snapmirror update -destination-path svm:vol1_dst
Operation is queued: snapmirror update of destination "svm:vol1_dst".

::*> snapmirror show-history -max-rows-per-relationship 1

Destination Source                Start       End
Path        Path        Operation Time        Time        Result
----------- ----------- --------- ----------- ----------- -------
svm:vol1_dst
            svm:vol1    manual-update
                                  4/7/2024 06:21:56
                                              4/7/2024 06:21:58
                                                          success

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  clone_vol1_clone.2024-04-07_020316.0     200KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    144KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           148KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062156
                                                           148KB     0%    0%
         vol1_clone
                  clone_vol1_clone.2024-04-07_020316.0     252KB     0%    0%
                  clone_vol1_clone_clone.2024-04-07_021551.0
                                                           188KB     0%    0%
         vol1_clone2
                  clone_vol1_clone2.2024-04-07_061613.0   1.09MB     0%    0%
         vol1_clone3
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                          1.12MB     0%    0%
         vol1_dst
                  clone_vol1_clone.2024-04-07_020316.0     388KB     0%    0%
                  clone_vol1_clone2.2024-04-07_061613.0    232KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706
                                                           236KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062117
                                                           236KB     0%    0%
                  snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_062156
                                                           156KB     0%    0%
13 entries were displayed.

snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706, the Snapshot used when creating the FlexClone volume, is still there and has not been deleted. Nor did any transfer fail because of the Snapshot being locked.

In my testing, I could not find a situation where a SnapMirror transfer fails.
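If you want to double-check what is pinning a given Snapshot, ONTAP can report the Snapshot's owners. A minimal sketch, assuming the diag privilege level already set earlier in this post (the output depends on your environment, so it is omitted here; if the owners field is unavailable in your ONTAP version, volume clone show also reveals the clone dependency):

::*> snapshot show -volume vol1 -snapshot snapmirror.646592b9-f47f-11ee-8e23-0fe9d475982c_2163179383.2024-04-07_061706 -fields owners

If the FlexClone volume appears as an owner and you later need the Snapshot gone, splitting the clone with volume clone split start removes the dependency so the Snapshot can be rotated out as usual.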

Make good use of FlexClone to speed up development and cut costs

In this post, I introduced NetApp ONTAP's FlexClone.

Making good use of FlexClone should help speed up development and reduce costs. I encourage you to take advantage of it.

I hope this article helps someone.

That's all from non-P (@non____97) of the AWS Business Division Consulting Department!


© Classmethod, Inc. All rights reserved.